Modernize polynomial_custom_function: torch.accelerator and setup_context #3885
jvz37 wants to merge 1 commit into pytorch:main
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/tutorials/3885
Note: Links to docs will display an error until the docs builds have been completed.
❌ 1 New Failure. As of commit e594ec0 with merge base 1655d0a, the following job has failed:
This comment was automatically generated by Dr. CI and updates every 15 minutes.
Note: the commit author shows […]
Claude finished @sekyondaMeta's task in 1m 44s. View job
Review of PR #3885
Summary
This PR modernizes the polynomial_custom_function tutorial: accelerator-agnostic device selection via torch.accelerator and the forward/setup_context split.

Issues Found
1. Trailing whitespace on line 39 (minor, will likely fail CI lint)

```python
        return 0.5 * (5 * input ** 3 - 3 * input)    ←── trailing spaces here

    @staticmethod
```

Line 39 contains trailing whitespace between the `return` statement and the following `@staticmethod` decorator.

2. Device selection pattern inconsistency with sibling tutorial (nit)

The sibling file […] uses:

```python
device = torch.accelerator.current_accelerator().type if torch.accelerator.is_available() else "cpu"
print(f"Using {device} device")
torch.set_default_device(device)
```

This PR's version wraps the same logic in parentheses (multi-line), which is fine stylistically, but it omits the explanatory comment and […].
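For illustration, a minimal sketch of what an aligned multi-line version could look like; the comment wording and exact formatting below are assumptions, not quoted from either tutorial file:

```python
import torch

# Assumed comment: pick an accelerator (CUDA, MPS, XPU, ...) when one is
# available, otherwise fall back to the CPU.
device = (
    torch.accelerator.current_accelerator().type
    if torch.accelerator.is_available()
    else "cpu"
)
print(f"Using {device} device")
torch.set_default_device(device)
```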
This is purely a suggestion for tutorial pedagogical consistency; the code is functionally correct without it.

What Looks Good
[…]

Verdict
Looks good overall. The trailing whitespace on line 39 should be fixed before merge (it will likely fail lint CI). The other suggestion about matching the sibling tutorial's style is optional.
Fixes #3880
Description
Modernizes the custom autograd function tutorial by:
torch.device("cpu")withtorch.acceleratorfor accelerator-agnostic device selectionforward(ctx, input)intoforward(input)+setup_context(ctx, inputs, output)per PyTorch 2.0+ recommended patternTested locally: 2000 iterations, loss converged, output identical to original.
Checklist